Kruskal–Wallis one-way analysis of variance

In statistics, the Kruskal–Wallis one-way analysis of variance by ranks (named after William Kruskal and W. Allen Wallis) is a non-parametric method for testing whether samples originate from the same distribution. It is used for comparing more than two independent (unrelated) samples. The parametric equivalent of the Kruskal–Wallis test is the one-way analysis of variance (ANOVA). The null hypothesis is that the populations from which the samples originate have the same median. When the Kruskal–Wallis test leads to significant results, at least one of the samples differs from the others; the test does not identify where the differences occur or how many there are. It is an extension of the Mann–Whitney U test to three or more groups; the Mann–Whitney test can then be used to analyze specific sample pairs for significant differences.

Since it is a non-parametric method, the Kruskal–Wallis test does not assume a normal population, unlike the analogous one-way analysis of variance. However, the test does assume an identically shaped and scaled distribution for each group, except for any difference in medians.
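
To illustrate, here is a minimal sketch using SciPy's implementation of the test (scipy.stats.kruskal); the three sample groups are invented for demonstration only:

  # Minimal illustration of the Kruskal-Wallis test via SciPy.
  # The three groups below are made-up data for demonstration.
  from scipy import stats

  group_a = [6.2, 7.1, 6.8, 7.9, 6.5]
  group_b = [8.1, 8.7, 7.9, 9.0, 8.4]
  group_c = [6.9, 7.3, 7.5, 6.6, 7.8]

  statistic, p_value = stats.kruskal(group_a, group_b, group_c)
  print(f"H = {statistic:.3f}, p = {p_value:.4f}")

A small p-value suggests that at least one group differs from the others; the test alone does not say which pairs differ.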

Method

  1. Rank all data from all groups together; i.e., rank the data from 1 to N ignoring group membership. Assign any tied values the average of the ranks they would have received had they not been tied.
  2. The test statistic is given by:
    K = (N-1)\frac{\sum_{i=1}^g n_i(\bar{r}_{i\cdot} - \bar{r})^2}{\sum_{i=1}^g\sum_{j=1}^{n_i}(r_{ij} - \bar{r})^2}, where:
    • g is the number of groups
    • n_i is the number of observations in group i
    • r_{ij} is the rank (among all observations) of observation j from group i
    • N is the total number of observations across all groups
    • \bar{r}_{i\cdot} = \frac{\sum_{j=1}^{n_i}{r_{ij}}}{n_i},
    • \bar{r} = \tfrac{1}{2}(N+1) is the average of all the r_{ij}.
  3. Notice that the denominator of the expression for K is exactly (N-1)N(N+1)/12 and \bar{r}=\tfrac{N+1}{2}. Thus
    
\begin{align}
K & = \frac{12}{N(N+1)}\sum_{i=1}^g n_i \left(\bar{r}_{i\cdot} - \frac{N+1}{2}\right)^2 \\
  & = \frac{12}{N(N+1)}\sum_{i=1}^g n_i \bar{r}_{i\cdot}^2 - 3(N+1)
\end{align}
    Notice that the last formula only contains the squares of the average ranks.
  4. A correction for ties can be made by dividing K by 1 - \frac{\sum_{i=1}^G (t_i^3 - t_i)}{N^3-N}, where G is the number of groupings of different tied ranks, and t_i is the number of tied values within tie-grouping i that are tied at a particular value. This correction usually makes little difference in the value of K unless there are a large number of ties.
  5. Finally, the p-value is approximated by \Pr(\chi^2_{g-1} \ge K). If some n_i values are small (i.e., less than 5), the probability distribution of K can be quite different from this chi-squared distribution. If a table of the chi-squared probability distribution is available, the critical value of chi-squared, \chi^2_{\alpha: g-1}, can be found by entering the table at g − 1 degrees of freedom and looking under the desired significance or alpha level. The null hypothesis of equal population medians would then be rejected if K \ge \chi^2_{\alpha: g-1}. Appropriate multiple comparisons would then be performed on the group medians. (A from-scratch sketch of steps 1–5 follows this list.)
  6. If the statistic is not significant, the test provides no evidence of differences between the samples. However, if the test is significant, a difference exists between at least two of the samples. A researcher might then use sample contrasts between individual sample pairs, or post hoc tests such as the pairwise comparisons sketched below, to determine which sample pairs differ significantly. When performing multiple sample contrasts, the Type I error rate tends to become inflated, so an adjustment to the significance level is commonly applied.
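
The following from-scratch sketch implements steps 1–5 above, assuming SciPy for the ranking and the chi-squared tail probability; the function name kruskal_wallis is ours, not a library routine:

  import numpy as np
  from scipy.stats import chi2, rankdata

  def kruskal_wallis(*groups):
      """Sketch of steps 1-5: tie-corrected statistic K and its
      chi-squared approximate p-value."""
      all_obs = np.concatenate(groups)
      N = all_obs.size
      g = len(groups)

      # Step 1: rank all observations together; rankdata gives tied
      # values the average of the ranks they would have received.
      ranks = rankdata(all_obs)

      # Steps 2-3: simplified formula using only the average rank
      # per group.
      K, start = 0.0, 0
      for grp in groups:
          n_i = len(grp)
          r_bar_i = ranks[start:start + n_i].mean()
          K += n_i * r_bar_i ** 2
          start += n_i
      K = 12.0 / (N * (N + 1)) * K - 3 * (N + 1)

      # Step 4: divide by the tie correction factor.
      _, t = np.unique(all_obs, return_counts=True)
      K /= 1 - np.sum(t**3 - t) / (N**3 - N)

      # Step 5: chi-squared tail with g - 1 degrees of freedom
      # (approximate; unreliable when some n_i are below 5).
      return K, chi2.sf(K, df=g - 1)

For the invented data in the earlier snippet, this sketch agrees with scipy.stats.kruskal, which applies the same tie correction and chi-squared approximation.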
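
As one common post hoc procedure (a choice on our part, not prescribed above), pairwise Mann–Whitney U tests can be run with a Bonferroni adjustment to keep the family-wise Type I error rate near the nominal level; a sketch, again assuming SciPy:

  from itertools import combinations
  from scipy.stats import mannwhitneyu

  def pairwise_mann_whitney(groups, alpha=0.05):
      """Bonferroni-adjusted pairwise comparisons after a significant
      Kruskal-Wallis result; `groups` maps a label to its sample."""
      pairs = list(combinations(groups, 2))
      adjusted = alpha / len(pairs)  # Bonferroni correction
      for a, b in pairs:
          _, p = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
          flag = "differ" if p < adjusted else "are not distinguishable"
          print(f"{a} vs {b}: p = {p:.4f} ({flag} at family alpha = {alpha})")

The Bonferroni divisor len(pairs) grows quickly with the number of groups, so less conservative adjustments (e.g., Holm's method) are often preferred.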
